
    Human expert supervised selection of time-frequency intervals in EEG signals for brain–computer interfacing

    In the context of brain–computer interfacing based on motor imagery, we propose a method that allows a human expert to supervise the selection of user-specific time-frequency features computed from EEG signals. In the current state of BCI research, at least one expert is always involved in the first stages of any experiment. On the one hand, such experts appreciate keeping a certain level of control over the tuning of user-specific parameters; on the other hand, we show that their knowledge is extremely valuable for selecting a sparse set of significant time-frequency features. The expert selects these features through a visual analysis of curves highlighting differences between the electroencephalographic activities recorded during various motor imagery tasks. We compare our method to the basic common spatial patterns approach and to two fully automatic feature extraction methods on dataset 2A of BCI competition IV. Our method (mean accuracy 83.71 ± 14.6) outperforms the best competing method (79.48 ± 12.41) for 6 of the 9 subjects.
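The kind of feature the expert inspects here is, in essence, the band power of an EEG channel within a chosen time-frequency interval. The sketch below is a minimal illustration of that idea in Python; the data layout, sampling rate, channel index and frequency band are assumptions for illustration, not the authors' pipeline.

```python
# Minimal sketch (illustrative assumptions, not the authors' implementation):
# band power of one EEG channel in a given time-frequency interval.
import numpy as np
from scipy.signal import butter, filtfilt

def band_power(trials, fs, channel, band, t_start, t_end):
    """Mean log band power of one channel in a time-frequency interval.

    trials : array of shape (n_trials, n_channels, n_samples)  [assumed layout]
    fs     : sampling rate in Hz
    band   : (low_hz, high_hz) frequency interval
    t_start, t_end : time interval in seconds relative to trial onset
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    seg = trials[:, channel, int(t_start * fs):int(t_end * fs)]
    filtered = filtfilt(b, a, seg, axis=-1)
    return np.log(np.mean(filtered ** 2, axis=-1))  # one feature value per trial

# An expert could then compare such values between motor-imagery classes, e.g.:
# mu_left  = band_power(left_trials,  250, channel=7, band=(8, 12), t_start=0.5, t_end=2.5)
# mu_right = band_power(right_trials, 250, channel=7, band=(8, 12), t_start=0.5, t_end=2.5)
```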

    SSVEP-based BCIs: study of classifier stability over time and effects of human learning on classification accuracy

    Brain-computer interfaces (BCI) based on steady-state visual evoked potentials (SSVEP) enable a user to control an application by focusing his or her attention on visual stimuli blinking at specific frequencies. This interaction technique can help people suffering from severe motor disabilities to improve their quality of life by regaining partial autonomy. According to the literature, each usage session of an SSVEP-based BCI includes a calibration phase aimed in particular at computing the classifier's parameters. Our objective is to evaluate whether the same parameters can be reused across several sessions, in order to avoid systematically performing a calibration phase, which is very restrictive for the user. To do so, we analyze the stability of classification results over time. In addition, the data acquired during our experiments were used to study the possible effects of human learning on interface performance and to confirm, or not, the state-of-the-art knowledge on this subject. According to the literature, SSVEP-based BCIs work well from the first use and their performance does not improve with the subject's experience.
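The session-transfer question can be phrased very simply: calibrate a classifier once and score it, unchanged, on a later session. The sketch below illustrates this under the assumption of precomputed SSVEP features and an LDA classifier; both are illustrative choices, not necessarily the pipeline used in the study.

```python
# Minimal sketch (illustrative assumptions): reuse a classifier calibrated on
# one session, without recalibration, on a later session.
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def evaluate_transfer(feat_session1, labels_session1, feat_session2, labels_session2):
    """Accuracy on session 2 of a classifier calibrated only on session 1."""
    clf = LinearDiscriminantAnalysis()
    clf.fit(feat_session1, labels_session1)           # calibration phase, done once
    return clf.score(feat_session2, labels_session2)  # no recalibration on session 2

# Stability over time can then be read from the accuracy drop (if any) between
# the within-session score and evaluate_transfer(...).
```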

    The Pervasive Fridge. A smart computer system against uneaten food loss

    Food waste or food loss is food that is discarded or lost uneaten. The work presented in this paper is related to our research in the field of pervasive and ubiquitous computing. Our "Pervasive Fridge" prototype proactively notifies users when a food item approaches its expiration date. Speech and image recognition are also integrated in the prototype. The system combines various resources to scan barcodes and to identify and store product data with a smartphone. Notifications are later sent to consumers free of charge by e-mail, SMS and pop-up, to avoid uneaten food loss.
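A minimal sketch of the proactive expiry check is given below. The product representation, field names and notify() stub are assumptions for illustration; the actual prototype additionally integrates barcode scanning, speech and image recognition, and multichannel notifications.

```python
# Minimal sketch (illustrative assumptions, not the prototype's code):
# find products close to their expiration date and notify the user.
from datetime import date, timedelta

def products_to_notify(products, days_ahead=2):
    """Return products whose expiration date falls within `days_ahead` days."""
    limit = date.today() + timedelta(days=days_ahead)
    return [p for p in products if p["expires"] <= limit]

def notify(user, product):
    # Placeholder: the prototype pushes e-mail, SMS and pop-up notifications.
    print(f"Reminder for {user}: '{product['name']}' expires on {product['expires']}")

fridge = [{"name": "yogurt", "expires": date.today() + timedelta(days=1)},
          {"name": "juice",  "expires": date.today() + timedelta(days=10)}]
for item in products_to_notify(fridge):
    notify("alice", item)
```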

    Contribution Ă  l'Ă©tude du dialogue Homme-Machine Ă  travers le Web : la personnalisation

    In the field of Man-Machine Dialogue (DHM), users need systems able to dialogue with them in a relevant and cooperative way, in natural language (NL). Indeed, it is the machine that must adapt to the cognitive capacities of humans, not the opposite. However, models of DHM are still limited by the lack of reference data in the field. How can we obtain data to build a system that has not yet produced such data? This circularity problem limits the design and development of cooperative computer systems adapted to users' skills. Our method of gathering a corpus on the World Wide Web (the HALPIN system), more flexible and truly interactive between Man and Machine, was developed on the basis of concept recognition in speech; it should help us to model DHM better and to avoid the biases induced by Wizard of Oz or electronic messaging (Minitel) protocols.

    X-CAMPUS : démarche et outils pour une assistance proactive contextuelle

    The contribution of new technologies to the development of computing has given rise to new research domains such as Ambient Intelligence (AmI), in which these technologies are used as tools to assist users in their daily tasks. Our study focuses on proactive assistance and human-machine interaction; we study contextual proactive assistance in order to design an adaptive system able to determine the needs of a user who wishes to have a virtual assistant. We are interested in the contextual proactive interaction between a human and a conversational agent that we have named X-CAMPUS (eXtensible Conversational Agent for Multichannel Proactive Ubiquitous Services). It aims to assist the user in his or her daily tasks thanks to its ability to perceive the state of the environment and to interact effectively according to the user's needs. We show that, depending on the context, X-CAMPUS notifies the user with personalized messages (e.g., restaurant suggestions based on their menus and the user's preferences) through the most appropriate channel (instant messaging, e-mail or SMS).
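A minimal sketch of the context-dependent channel selection described above might look as follows. The context fields, rules and restaurant-matching logic are illustrative assumptions, not X-CAMPUS's actual decision model.

```python
# Minimal sketch (illustrative assumptions): pick a notification channel from a
# simple view of the user's context, then send a personalized suggestion.
def pick_channel(context):
    """Return 'im', 'email' or 'sms' depending on the user's current context."""
    if context.get("online") and not context.get("busy"):
        return "im"                      # user is reachable right now
    if context.get("mobile_only"):
        return "sms"                     # no data connection, fall back to SMS
    return "email"                       # default, non-intrusive channel

def suggest_restaurant(user, context, restaurants):
    """Send a restaurant suggestion matching the user's preferences."""
    matches = [r for r in restaurants if r["menu"] in user["preferences"]]
    if not matches:
        return None
    return pick_channel(context), f"Suggested restaurant: {matches[0]['name']}"

user = {"preferences": {"vegetarian"}}
print(suggest_restaurant(user, {"online": True}, [{"name": "Chez Léa", "menu": "vegetarian"}]))
```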

    Approche IDM pour le développement d'applications mobiles multimodales

    Mobile computing offers various ways to interact with users. The variety of sensors that equip mobile devices (smartphones, tablets, ...), together with their associated modalities, provides an important ecosystem for testing and validating scientific proposals concerning the specification of multimodal interactions. In this article, building on previous work on modeling and developing multimodal applications, we present an MDE (Model-Driven Engineering) approach for modeling and generating multimodal mobile applications.
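To make the model-driven idea concrete, the sketch below generates event-handler skeletons from a tiny platform-independent model of an interaction (a task and the modalities that can trigger it). The model format and the generated target are illustrative assumptions, not the authors' metamodel or transformation chain.

```python
# Minimal sketch (illustrative assumptions): a toy interaction model and a
# trivial model-to-code transformation that emits handler skeletons.
interaction_model = {
    "task": "TakePhoto",
    "modalities": [
        {"kind": "touch",  "event": "button_tap"},
        {"kind": "speech", "event": "voice_command", "grammar": ["take a photo"]},
    ],
}

def generate_handlers(model):
    """Generate a (very simplified) event-handler skeleton per modality."""
    stubs = []
    for m in model["modalities"]:
        stubs.append(f"# {m['kind']} modality -> task {model['task']}")
        stubs.append(f"def on_{m['event']}():")
        stubs.append(f"    start_task('{model['task']}')")
        stubs.append("")
    return "\n".join(stubs)

print(generate_handlers(interaction_model))
```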

    Proactive Assistance Within Ambient Environment. Towards intelligent agent server that anticipate and provide users' needs.

    User needs are expanding and becoming more complex with the emergence of newly adopted technologies. As a result, the convergence of smart devices, able to communicate, share information and satisfy user needs, profoundly changes the way we interact with our environment. Such devices should provide adaptive assistance in both reactive and proactive modes, as well as new communication methods based on multimodal and multichannel interfaces. However, most existing context-aware systems have an extremely tight coupling between application semantics and sensor details. The objective of our research is therefore to implement an approach that supports the reuse of sensors and the evolution of existing applications toward new context types. In this paper, we illustrate our approach for proactive intelligent assistance and describe our architecture, which is based on three principal layers. These layers are designed to build applications that can increase the well-being of users situated in an intelligent environment.
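The decoupling argued for above can be illustrated with a small publish/subscribe context broker: applications subscribe to abstract context types rather than to concrete sensors, so sensors can be replaced and new context types added without touching application code. Class and method names below are illustrative assumptions, not the paper's three-layer architecture.

```python
# Minimal sketch (illustrative assumptions): a broker decoupling sensors from
# context-aware applications.
class ContextBroker:
    """Middle layer between raw sensors and context-aware applications."""
    def __init__(self):
        self._subscribers = {}            # context type -> list of callbacks

    def subscribe(self, context_type, callback):
        self._subscribers.setdefault(context_type, []).append(callback)

    def publish(self, context_type, value):
        for callback in self._subscribers.get(context_type, []):
            callback(value)

broker = ContextBroker()
# An application reacts to an abstract "presence" context type...
broker.subscribe("presence", lambda v: print("assistant: user is", v))
# ...while any sensor (PIR, phone GPS, badge reader) can feed it.
broker.publish("presence", "at_home")
```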

    Supporting Mobile Connectivity: from Learning Scenarios to Multichannel Devices: Special Issue on "Learning as a Ubiquitous and Continuous Communication Attitude"

    Guest Editor: Piet Kommers. The introduction of distance learning not only brings a wider audience, but also much more diversity among the learners: first, because it can be integrated more easily into a lifelong learning strategy; secondly, because the learners are not restricted to a single area, so learners from different countries and cultures follow the curriculum. We have observed this in the various distance-learning diplomas in which we participate. In this article, we shed some light on the difficulties and challenges arising from these multicultural settings. Based on our research work, we insist on two particular points: the necessity to adapt the pedagogical settings (e.g., pedagogical scenarios) according to the learners' behaviour in order to overcome unforeseen problems due to cultural differences, and the importance of considering mobile technologies to overcome limited access to technology in developing countries and to ensure continuous interaction among learners and with tutors.

    Hybrid BCI Coupling EEG and EMG for Severe Motor Disabilities

    In this paper, we study hybrid Brain-Computer Interfaces (BCI) coupling joystick data, electroencephalogram (EEG, the electrical activity of the brain) and electromyogram (EMG, the electrical activity of muscles) for severe motor disabilities. We focus our study on muscular activity as a control modality to interact with an application, and present our data processing and classification technique to detect right and left hand movements. The EMG modality is well adapted to DMD patients, because less strength is needed to detect movements than with conventional interfaces such as joysticks. Through virtual reality tools, we believe that users will better understand how to interact with this kind of interactive system. This first part of our study reports very good results for the detection of hand movements from the muscular channel, on healthy subjects.
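One common way to detect left versus right hand movements from EMG is to compare the RMS amplitude of the two arm channels over a sliding window. The sketch below illustrates this; the threshold, window length and channel layout are assumptions, not the processing chain used in the study.

```python
# Minimal sketch (illustrative assumptions): RMS-based left/right movement
# detection from two EMG channels.
import numpy as np

def detect_movement(emg_window, threshold=0.05):
    """emg_window: array of shape (2, n_samples) = (left channel, right channel).

    Returns 'left', 'right' or None for a single analysis window.
    """
    rms = np.sqrt(np.mean(emg_window ** 2, axis=1))   # one RMS value per channel
    if max(rms) < threshold:
        return None                                   # no muscular activity
    return "left" if rms[0] > rms[1] else "right"

# Example on synthetic data: a burst on the right channel only.
window = np.vstack([0.01 * np.random.randn(500),
                    0.2 * np.random.randn(500)])
print(detect_movement(window))                        # most likely 'right'
```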

    From Metamodeling to Automatic Generation of Multimodal Interfaces for Ambient Computing

    This paper presents our approach to designing multichannel and multimodal applications as part of ambient intelligence. Computers are increasingly present in our environments, whether at work (computers, photocopiers), at home (video player, hi-fi, microwave), in our cars, etc. They are more adaptable and context-sensitive (e.g., the car radio that lowers the volume when the mobile phone rings). Unfortunately, while they should provide smart services by combining their skills, they are not yet designed to communicate together. Our results, mainly based on the use of a software bus and a workflow, show that different devices (such as a Wiimote, a multi-touch screen or a telephone) can be coordinated in order to activate real things (such as a lamp, fan, robot or webcam). A smart digital home case study illustrates how our approach makes it easy to design some parts of the ambient system and to redesign them at runtime.
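The software-bus idea can be illustrated with a toy event queue on which heterogeneous input devices publish events, plus a small workflow table that maps those events to actuator commands. Device names, events and actions below are illustrative assumptions, not the paper's middleware.

```python
# Minimal sketch (illustrative assumptions): a toy software bus and a workflow
# mapping device events to actuator commands.
import queue

bus = queue.Queue()                      # stands in for the software bus

workflow = {                             # (device, event) -> (actuator, command)
    ("wiimote", "button_A"): ("lamp", "toggle"),
    ("touchscreen", "swipe_up"): ("fan", "speed_up"),
    ("phone", "call_incoming"): ("webcam", "point_to_door"),
}

def drive_actuators():
    """Consume device events from the bus and trigger the mapped actions."""
    while not bus.empty():
        device, event = bus.get()
        target = workflow.get((device, event))
        if target:
            print(f"{device}/{event} -> {target[0]}.{target[1]}()")

bus.put(("wiimote", "button_A"))
bus.put(("touchscreen", "swipe_up"))
drive_actuators()
```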